Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity

Balch, Andrew

arXiv.org Artificial Intelligence

Algorithms play an increasingly significant role in our social lives. Unfortunately, they often perpetuate social injustices while doing so. The popular means of addressing these algorithmic injustices has been through algorithmic reformism: fine-tuning the algorithm itself to be more fair, accountable, and transparent. While commendable, the emerging discipline of critical algorithm studies shows that reformist approaches have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms. Heeding calls from critical algorithm studies to analyze this power structure, I employ a framework developed by Erik Olin Wright to examine the configuration of power surrounding Algorithmic Activity: the ways in which algorithms are researched, developed, trained, and deployed within society. I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment. For Algorithmic Activity to be socially just, we need to transform this power configuration to empower the people at the other end of an algorithm. To this end, I explore Wright's symbiotic, interstitial, and ruptural transformations in the context of Algorithmic Activity, as well as how they may be applied in a hypothetical research project that uses algorithms to address a social issue. I conclude with my vision for socially just Algorithmic Activity, asking that future work strive to integrate the proposed transformations and develop new mechanisms for social empowerment.


AI responsibility: Taming the algorithm

#artificialintelligence

We've reached a point where human (cognitive) task performance is being augmented or even replaced by AI. So who or what is responsible for what this AI does? While the question seems simple enough, legal answers from the field remain opaque and contested. This is because AI performs human-like tasks without bearing the clear legal accountability of a human, and the open question is whether it should bear any. Fortunately, now that machine learning and artificial intelligence are expanding into an ever-increasing number of practical domains, real-world legal interpretations and guiding principles are forming around the topic.